PYRO-NN: Python Reconstruction Operators in Neural Networks
Purpose: Recently, several attempts have been made to transfer deep learning
to medical image reconstruction. An increasing number of publications follow
the concept of embedding CT reconstruction as a known operator into a
neural network. However, most of the approaches presented lack an efficient CT
reconstruction framework fully integrated into deep learning environments. As a
result, many approaches are forced to use workarounds for problems that are
mathematically unambiguously solvable. Methods: PYRO-NN is a generalized
framework for embedding known operators into the prevalent deep learning
framework TensorFlow.
The current status includes state-of-the-art parallel-, fan- and cone-beam
projectors and back-projectors accelerated with CUDA, provided as TensorFlow
layers. On top of these, the framework provides a high-level Python API to
conduct filtered back-projection (FBP) and iterative reconstruction
experiments with data from real CT systems.
Results: The framework provides all necessary algorithms and tools to design
end-to-end neural network pipelines with integrated CT reconstruction
algorithms. The high-level Python API allows simple use of the layers, as
familiar from TensorFlow. To demonstrate the capabilities of the layers, the
framework comes with three baseline experiments showing a cone-beam short-scan
FDK reconstruction, a CT reconstruction filter learning setup, and a
TV-regularized iterative reconstruction. All algorithms and tools are
referenced to a scientific publication and are compared to existing
non-deep-learning reconstruction frameworks. The framework is available as
open-source software
at \url{https://github.com/csyben/PYRO-NN}. Conclusions: PYRO-NN integrates
with the prevalent deep learning framework TensorFlow and allows setting up
end-to-end trainable neural networks in the medical image reconstruction
context. We believe that the framework will be a step towards reproducible
research.
Comment: V1: Submitted to Medical Physics, 11 pages, 7 figures
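The idea of a differentiable projection layer can be sketched on a toy problem. The following NumPy example (purely illustrative; PYRO-NN provides CUDA-accelerated projectors and back-projectors as TensorFlow layers, and the dense matrix here stands in for them) shows why such operators embed naturally into networks: the adjoint of the forward projector is the back-projector, which is exactly the operation needed to back-propagate a data-fidelity loss through the projection.

```python
import numpy as np

# Toy "known operator": a parallel-beam projector for 2D images, taking
# projections at 0 and 90 degrees (row sums and column sums).

def projection_matrix(n):
    """System matrix A stacking the 0-degree and 90-degree projections."""
    A = np.zeros((2 * n, n * n))
    for i in range(n):
        A[i, i * n:(i + 1) * n] = 1.0   # 0 degrees: sum along each image row
        A[n + i, i::n] = 1.0            # 90 degrees: sum along each image column
    return A

n = 4
rng = np.random.default_rng(0)
A = projection_matrix(n)
x_true = rng.random(n * n)              # flattened ground-truth image
y = A @ x_true                          # simulated sinogram

# Back-propagation through the projection layer uses the adjoint A.T
# (the back-projector) to compute the gradient of the data-fidelity loss:
x_est = np.zeros(n * n)
residual = A @ x_est - y
grad = A.T @ residual                   # gradient of 0.5 * ||A x - y||^2
x_est = x_est - 0.01 * grad             # one gradient-descent step
```

In a framework like TensorFlow, registering the back-projector as the custom gradient of the projector layer gives exactly this behavior inside automatic differentiation.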
Precision Learning: Towards Use of Known Operators in Neural Networks
In this paper, we consider the use of prior knowledge within neural networks.
In particular, we investigate the effect of a known transform within the
mapping from input data space to the output domain. We demonstrate that the
use of known transforms changes the maximal error bounds.
In order to explore the effect further, we consider the problem of X-ray
material decomposition as an example to incorporate additional prior knowledge.
We demonstrate that including a non-linear function known from the physical
properties of the system reduces prediction errors, thereby improving
prediction quality from an SSIM of 0.54 to 0.88.
This approach is applicable to a wide set of applications in physics and
signal processing that provide prior knowledge on such transforms. Maximal
error estimation and network understanding could also be facilitated within
the context of precision learning.
Comment: accepted at ICPR 201
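The benefit of fixing a known transform rather than learning the whole mapping can be seen on a toy regression problem. The sketch below is an assumption-laden illustration, not the paper's experiment: the exponential loosely mimics Beer-Lambert attenuation, and only the linear mixing is treated as unknown.

```python
import numpy as np

# Precision-learning idea on a toy problem: when part of the mapping is a
# known (non-linear) physical transform, fix it and learn only the unknown
# remainder, instead of learning the whole mapping from scratch.

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 2.0, size=(500, 3))   # e.g., material path lengths
W_true = rng.normal(size=(3, 3))
g = lambda t: np.exp(-t)                   # known physical non-linearity
y = g(x) @ W_true.T                        # ground-truth forward model

# (a) naive: learn a purely linear map directly from x to y
W_naive, *_ = np.linalg.lstsq(x, y, rcond=None)
err_naive = np.mean((x @ W_naive - y) ** 2)

# (b) precision learning: apply the known transform first, learn only W
W_known, *_ = np.linalg.lstsq(g(x), y, rcond=None)
err_known = np.mean((g(x) @ W_known - y) ** 2)
# err_known is numerically zero while err_naive is not: the known operator
# removes model mismatch and shrinks the space of functions to be learned.
```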
Projection image-to-image translation in hybrid X-ray/MR imaging
The potential benefit of hybrid X-ray and MR imaging in the interventional
environment is large due to the combination of fast imaging with a broad
variety of contrasts. However, a vast number of existing image enhancement
methods require the image information of both modalities to be present in the
same domain. To unlock this potential, we present a solution to image-to-image
translation from
MR projections to corresponding X-ray projection images. The approach is based
on a state-of-the-art image generator network that is modified to fit the
specific application. Furthermore, we propose the inclusion of a gradient map
in the loss function to allow the network to emphasize high-frequency details
in image generation. Our approach is capable of creating X-ray projection
images with a natural appearance. Additionally, our extensions show clear
improvement compared to the baseline method.
Comment: In proceedings of SPIE Medical Imaging 201
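A gradient map in the loss can be sketched as a per-pixel weighting that grows where the target image has strong gradients, pushing the generator toward high-frequency details. The weighting formula below is a plausible assumption for illustration, not the paper's exact loss.

```python
import numpy as np

def gradient_weighted_l1(pred, target, alpha=1.0):
    """L1 loss with per-pixel weights derived from the target's gradient map.

    Pixels on strong edges of the target contribute up to (1 + alpha) times
    more than pixels in flat regions. Illustrative sketch only.
    """
    gy, gx = np.gradient(target)                       # image gradients
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)              # gradient magnitude map
    weights = 1.0 + alpha * grad_mag / (grad_mag.max() + 1e-8)
    return np.mean(weights * np.abs(pred - target))
```

The same error magnitude is thus penalized more when it falls on an edge of the target than when it falls in a homogeneous region.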
Deep Learning-based Patient Re-identification Is able to Exploit the Biometric Nature of Medical Chest X-ray Data
With the rise and ever-increasing potential of deep learning techniques in
recent years, publicly available medical datasets have become a key factor in
enabling reproducible development of diagnostic algorithms in the medical
domain. Medical data contains sensitive patient-related information and is
therefore usually anonymized by removing patient identifiers, e.g., patient
names, before
publication. To the best of our knowledge, we are the first to show that a
well-trained deep learning system is able to recover the patient identity from
chest X-ray data. We demonstrate this using the publicly available large-scale
ChestX-ray14 dataset, a collection of 112,120 frontal-view chest X-ray images
from 30,805 unique patients. Our verification system is able to identify
whether two frontal chest X-ray images are from the same person with an AUC of
0.9940 and a classification accuracy of 95.55%. We further highlight that the
proposed system is able to reveal the same person even ten and more years after
the initial scan. When pursuing a retrieval approach, we observe an mAP@R of
0.9748 and a precision@1 of 0.9963. Furthermore, we achieve an AUC of up to
0.9870 and a precision@1 of up to 0.9444 when evaluating our trained networks
on external datasets such as CheXpert and the COVID-19 Image Data Collection.
Based on this high identification rate, a potential attacker may leak
patient-related information and additionally cross-reference images to obtain
more information. Thus, there is a great risk of sensitive content falling into
unauthorized hands or being disseminated against the will of the concerned
patients. Especially during the COVID-19 pandemic, numerous chest X-ray
datasets have been published to advance research. Therefore, such data may be
vulnerable to potential attacks by deep learning-based re-identification
algorithms.
Comment: Published in Scientific Reports
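The verification step such a system performs can be sketched as follows: a trained network maps each image to an embedding vector, and two images are declared "same patient" when their embeddings are sufficiently similar. The embedding network, similarity measure, and threshold below are placeholders for illustration, not the paper's trained model.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def same_patient(emb_a, emb_b, threshold=0.8):
    """Verification decision: similar embeddings -> same patient.

    In the described system, emb_a and emb_b would be produced by a deep
    network applied to two chest X-ray images; the threshold is a
    hypothetical operating point.
    """
    return cosine_similarity(emb_a, emb_b) >= threshold
```

The reported AUC of 0.9940 corresponds to sweeping such a threshold over all image pairs, and the retrieval metrics (mAP@R, precision@1) to ranking a gallery by the same similarity.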
Projection-to-Projection Translation for Hybrid X-ray and Magnetic Resonance Imaging
Hybrid X-ray and magnetic resonance (MR) imaging holds large potential in interventional medical imaging applications due to the broad variety of contrasts of MRI combined with the fast imaging of X-ray-based modalities. To fully utilize the potential of the vast number of existing image enhancement techniques, the corresponding information from both modalities must be present in the same domain. For image-guided interventional procedures, X-ray fluoroscopy has proven to be the modality of choice. Synthesizing one modality from the other in this case is an ill-posed problem due to ambiguous signal and overlapping structures in projective geometry. To take on these challenges, we present a learning-based solution to MR to X-ray projection-to-projection translation. We propose an image generator network that focuses on high representation capacity in higher-resolution layers to allow for accurate synthesis of fine details in the projection images. Additionally, a weighting scheme in the loss computation that favors high-frequency structures is proposed to focus on the important details and contours in projection imaging. The proposed extensions prove valuable in generating X-ray projection images with a natural appearance. Our approach achieves a deviation from the ground truth of only 6% and a structural similarity measure of 0.913 ± 0.005. In particular, the high-frequency weighting assists in generating projection images with a sharp appearance and reduces erroneously synthesized fine details.